Node Embedding over Temporal Graphs
In this work, we present a method for node embedding in temporal graphs. We
propose an algorithm that learns the evolution of a temporal graph's nodes and
edges over time and incorporates these dynamics into a temporal node embedding
framework for different graph prediction tasks. We present a joint loss
function that creates a temporal embedding of a node by learning to combine its
historical temporal embeddings, such that it is optimized for a given task (e.g.,
link prediction). The algorithm is initialized using static node embeddings,
which are then aligned across the representations of a node at different time
points, and eventually adapted for the given task in a joint optimization. We
evaluate the effectiveness of our approach over a variety of temporal graphs
for the two fundamental tasks of temporal link prediction and multi-label node
classification, comparing to competitive baselines and algorithmic
alternatives. Our algorithm shows performance improvements across many of the
datasets and baselines and is found particularly effective for graphs that are
less cohesive, with a lower clustering coefficient.
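The combination step described above can be sketched as follows. This is an illustrative toy, not the paper's exact method: the function names, the softmax weighting over time points, and the dot-product link score are all assumptions made for the example.

```python
import numpy as np

def combine_history(hist_embeds, scores):
    """Combine a node's historical embeddings into one temporal embedding.
    hist_embeds: (T, d) array of the node's embeddings at T time points.
    scores: (T,) unnormalized weights (in the paper these would be learned
    jointly with the task loss; here they are fixed for illustration)."""
    w = np.exp(scores - scores.max())
    w /= w.sum()                      # softmax over time points
    return w @ hist_embeds            # weighted sum -> (d,) temporal embedding

def link_score(z_u, z_v):
    """Dot-product score, as used by a typical link-prediction objective."""
    return float(z_u @ z_v)

rng = np.random.default_rng(0)
hist_u = rng.normal(size=(5, 8))      # 5 historical snapshots, embedding dim 8
hist_v = rng.normal(size=(5, 8))
scores = np.linspace(0.0, 1.0, 5)     # e.g., weights favoring recent snapshots
z_u = combine_history(hist_u, scores)
z_v = combine_history(hist_v, scores)
print(link_score(z_u, z_v))
```

In the actual framework the per-snapshot weights would be parameters trained end-to-end on the downstream task, which is what lets the same historical embeddings be combined differently for link prediction versus node classification.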
BASiS: Batch Aligned Spectral Embedding Space
A graph is a highly generic and diverse representation, suitable for almost any
data processing problem. Spectral graph theory has been shown to provide
powerful algorithms, backed by solid linear algebra theory. It thus can be
extremely instrumental to design deep network building blocks with spectral
graph characteristics. For instance, such a network allows the design of
optimal graphs for certain tasks or obtaining a canonical orthogonal
low-dimensional embedding of the data. Recent attempts to solve this problem
were based on minimizing Rayleigh-quotient type losses. We propose a different
approach: directly learning the eigenspace. A severe problem of the direct
approach, applied in batch-learning, is the inconsistent mapping of features to
eigenspace coordinates in different batches. We analyze the degrees of freedom
of learning this task using batches and propose a stable alignment mechanism
that can work both with batch changes and with graph-metric changes. We show
that our learnt spectral embedding is better in terms of NMI, ACC, Grassman
distance, orthogonality, and classification accuracy, compared to SOTA. In
addition, the learning is more stable.
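The batch-inconsistency problem described above can be made concrete with a small sketch. This is not the paper's BASiS algorithm; it illustrates one standard alignment tool (an orthogonal Procrustes solve over shared anchor points) that resolves exactly the kind of rotation/sign ambiguity that makes eigenspace coordinates differ between batches.

```python
import numpy as np

def procrustes_align(new_anchors, ref_anchors):
    """Find the orthogonal R minimizing ||new_anchors @ R - ref_anchors||_F."""
    M = new_anchors.T @ ref_anchors
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

rng = np.random.default_rng(1)
ref = rng.normal(size=(10, 3))                 # anchor embeddings, reference frame
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal "batch" rotation
new = ref @ Q.T                                # same anchors, rotated eigenbasis
R = procrustes_align(new, ref)
print(np.allclose(new @ R, ref))               # prints True: frame recovered
```

Eigenvectors are only defined up to sign (and, for repeated eigenvalues, rotation), so without such an alignment step the same data point can land at incompatible coordinates in different batches.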
Light generation via quantum interaction of electrons with periodic nanostructures
The Smith-Purcell effect is a hallmark of light-matter interactions in periodic structures, resulting in light emission with distinct spectral and angular distribution. We find as-yet-undiscovered effects in Smith-Purcell radiation that arise due to the quantum nature of light and matter, through an approach based on exact energy and momentum conservation. The effects include emission cutoff, convergence of emission orders, and a possible second photoemission process, appearing predominantly in structures with nanoscale periodicities (a few tens of nanometers or less), accessible by recent nanofabrication advances. We further present ways to manipulate the effects by varying the geometry or by accounting for a refractive index. Our derivation emphasizes the fundamental relation between Smith-Purcell radiation and Čerenkov radiation, and paves the way to alternative kinds of light sources wherein nonrelativistic electrons create Smith-Purcell radiation in nanoscale, on-chip devices. Finally, the path towards experimental realizations of these effects is discussed.
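For context, the classical Smith-Purcell dispersion relation (a standard textbook result, not the quantum corrections derived in this work) links the emitted wavelength $\lambda_n$ in diffraction order $n$ to the grating period $L$, the normalized electron velocity $\beta$, and the observation angle $\theta$ measured from the electron trajectory:

```latex
\lambda_n = \frac{L}{n}\left(\frac{1}{\beta} - \cos\theta\right),
\qquad \beta = \frac{v}{c}
```

The nanoscale-periodicity regime highlighted in the abstract corresponds to small $L$, where photon recoil becomes non-negligible and this classical relation acquires the quantum modifications the paper derives.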
On the Hardness of Category Tree Construction
Category trees, or taxonomies, are rooted trees where each node, called a category, corresponds to a set of related items. The construction of taxonomies has been studied in various domains, including e-commerce, document management, and question answering. Multiple algorithms for automating construction have been proposed, employing a variety of clustering approaches and crowdsourcing. However, no formal model to capture such categorization problems has been devised, and their complexity has not been studied. To address this, we propose in this work a combinatorial model that captures many practical settings and show that the aforementioned empirical approach has been warranted, as we prove strong inapproximability bounds for various problem variants and special cases when the goal is to produce a categorization of the maximum utility.
In our model, the input is a set of n weighted item sets that the tree would ideally contain as categories. Each category, rather than having to match its corresponding input set perfectly, is only required to exceed a given threshold under a given similarity function. The goal is to produce a tree that maximizes the total weight of the sets for which it contains a matching category. A key parameter is an upper bound on the number of categories an item may belong to; this bound is the source of the problem's hardness, as initially each item may be contained in an arbitrary number of input sets.
For this model, we prove inapproximability bounds, of order Ω̃(√n) or Ω̃(n), for various problem variants and special cases, loosely justifying the aforementioned heuristic approach. Our work includes reductions based on parameterized randomized constructions that highlight how various problem parameters and properties of the input may affect the hardness. Moreover, for the special case where the category must be identical to the corresponding input set, we devise an algorithm whose approximation guarantee depends solely on a more granular parameter, allowing improved worst-case guarantees. Finally, we also generalize our results to DAG-based and non-hierarchical categorization.
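The matching criterion in the model above can be illustrated with a small sketch. The choice of Jaccard similarity, the function names, and the example data are all assumptions for illustration; the paper's exact similarity function and threshold semantics may differ.

```python
def jaccard(a, b):
    """Jaccard similarity of two item sets: |a ∩ b| / |a ∪ b|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def covered_weight(categories, weighted_sets, tau):
    """Total weight of the input sets matched by at least one tree category,
    where 'matched' means similarity at least tau (the threshold)."""
    total = 0.0
    for items, w in weighted_sets:
        if any(jaccard(c, items) >= tau for c in categories):
            total += w
    return total

cats = [{"laptop", "tablet"}, {"phone", "charger", "case"}]
inputs = [({"laptop", "tablet"}, 3.0),   # exact match, similarity 1.0
          ({"phone", "charger"}, 2.0),   # similarity 2/3, above threshold
          ({"tv"}, 1.0)]                 # no similar category
print(covered_weight(cats, inputs, tau=0.6))  # prints 5.0
```

The hardness results concern choosing which categories to put in the tree (under the per-item membership bound) so as to maximize exactly this kind of covered weight.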
Explaining Predictive Uncertainty with Information Theoretic Shapley Values
Researchers in explainable artificial intelligence have developed numerous
methods for helping users understand the predictions of complex supervised
learning models. By contrast, explaining the uncertainty of model
outputs has received relatively little attention. We adapt the popular Shapley
value framework to explain various types of predictive uncertainty, quantifying
each feature's contribution to the conditional entropy of individual model
outputs. We consider games with modified characteristic functions and find deep
connections between the resulting Shapley values and fundamental quantities
from information theory and conditional independence testing. We outline
inference procedures for finite sample error rate control with provable
guarantees, and implement an efficient algorithm that performs well in a range
of experiments on real and simulated data. Our method has applications to
covariate shift detection, active learning, feature selection, and active
feature-value acquisition.
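The core idea above can be sketched with an exact Shapley computation over a toy characteristic function. The entropy table, the feature count, and the definition of v as "entropy reduction" are all illustrative assumptions, not the paper's estimators or inference procedures.

```python
from itertools import combinations
from math import comb

def shapley(v, n):
    """Exact Shapley values for players {0..n-1} and value function v,
    computed by summing weighted marginal contributions over all coalitions."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(n):
            for S in combinations(others, k):
                w = 1.0 / (n * comb(n - 1, k))  # Shapley coalition weight
                phi[i] += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

# Hypothetical conditional entropy (bits) of a prediction given each feature
# subset; in the paper these values would come from a model, not a table.
H = {frozenset(): 1.0, frozenset({0}): 0.5, frozenset({1}): 0.8,
     frozenset({2}): 1.0, frozenset({0, 1}): 0.2, frozenset({0, 2}): 0.5,
     frozenset({1, 2}): 0.8, frozenset({0, 1, 2}): 0.2}

# Attribute the entropy *reduction* relative to knowing no features.
v = lambda S: H[frozenset()] - H[frozenset(S)]
phi = shapley(v, 3)
print(phi)  # by efficiency, contributions sum to H(empty) - H(all) = 0.8
```

The efficiency axiom guarantees the per-feature contributions sum to the total uncertainty reduction, which is what makes the decomposition interpretable; feature 2 never changes the entropy in this table, so its Shapley value is zero.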
Explaining recommendations in an interactive hybrid social recommender
Hybrid social recommender systems use social relevance from multiple sources to recommend relevant items or people to users. To make hybrid recommendations more transparent and controllable, several researchers have explored interactive hybrid recommender interfaces, which allow for a user-driven fusion of recommendation sources. In this field of work, the intelligent user interface has been investigated as an approach to increase transparency and improve the user experience. In this paper, we attempt to further promote the transparency of recommendations by augmenting an interactive hybrid recommender interface with several types of explanations. We evaluate user behavior patterns and subjective feedback in a within-subject study (N=33). Results from the evaluation show the effectiveness of the proposed explanation models. The results of the post-treatment survey indicate a significant improvement in the perception of explainability, but this improvement comes with a lower degree of perceived controllability.